    Reasoning about LTL Synthesis over finite and infinite games

    In the last few years, research on formal methods for the analysis and verification of system properties has increased greatly. A meaningful contribution in this area has been given by algorithmic methods developed in the context of synthesis. The basic idea is simple and appealing: instead of developing a system and verifying that it satisfies its specification, we look for an automated procedure that, given the specification, returns a system that is correct by construction. Synthesis of reactive systems is one of the most popular variants of this problem, in which we want to synthesize a system characterized by an ongoing interaction with the environment. In this setting, a large effort has been devoted to analyzing specifications given as formulas of linear temporal logic, i.e., LTL synthesis. Traditional approaches to LTL synthesis rely on transforming the LTL specification into a deterministic parity automaton, and then into a parity game, for which a so-called winning region is computed. Computing such an automaton is, in the worst case, doubly exponential in the size of the LTL formula, and this becomes a computational bottleneck in using the synthesis process in practice. The first part of this thesis is devoted to improving the solution of parity games as they are used in LTL synthesis, aiming at techniques that are efficient in terms of both running time and space consumption. We start with the study and implementation of an automata-theoretic technique to solve parity games. More precisely, we consider an algorithm introduced by Kupferman and Vardi that solves a parity game by solving the emptiness problem of a corresponding alternating parity automaton. Our empirical evaluation demonstrates that this algorithm outperforms other algorithms when the game has a small number of priorities relative to its size. In many concrete applications, we do indeed end up with parity games where the number of priorities is relatively small, which makes the new algorithm quite useful in practice. We then provide a broad investigation of the symbolic approach to solving parity games. Specifically, we implement, in a fresh tool called SPGSolver, four symbolic algorithms to solve parity games, and compare their performance to the corresponding explicit versions for different classes of games. By means of benchmarks, we show that for random games, even for constrained random games, explicit algorithms actually perform better than symbolic algorithms. The situation changes, however, for structured games, where symbolic algorithms seem to have the advantage. This suggests that when evaluating algorithms for parity-game solving, it would be useful to have real benchmarks and not only random benchmarks, as has been common practice. LTL synthesis has also been investigated extensively in artificial intelligence, and specifically in automated planning. Indeed, LTL synthesis corresponds to fully observable nondeterministic planning in which the domain is given compactly and the goal is an LTL formula, which in turn is related to two-player games with LTL goals. Finding a strategy for these games amounts to synthesizing a plan for the planning problem. The last part of this thesis is then dedicated to investigating LTL synthesis under this different view.
    In particular, we study a generalized form of planning under partial observability, in which we have multiple, possibly infinitely many, planning domains with the same actions and observations, and goals expressed over observations, which are possibly temporally extended. By building on work on two-player games with imperfect information in the formal methods literature, we devise a general technique, generalizing the belief-state construction, to remove partial observability. This reduces the planning problem to a game of perfect information with a tight correspondence between plans and strategies. We then instantiate the technique and solve some generalized planning problems.
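
    As an illustration of this kind of reduction, the following Scala sketch shows the classic belief-state (subset) construction over a single finite planning domain. The Domain encoding and all names are illustrative; the technique in the thesis generalizes this construction to multiple, possibly infinitely many, domains and to temporally extended goals.

        object BeliefStateConstruction {
          type State  = Int
          type Action = String
          type Obs    = String
          type Belief = Set[State]   // the set of states the agent considers possible

          // A nondeterministic planning domain with an observation attached to each state.
          final case class Domain(
              init: Belief,
              actions: Set[Action],
              trans: (State, Action) => Set[State],
              obs: State => Obs
          )

          // Executing action `a` in belief `b`: collect all possible successors, then
          // split them by the observation the agent would receive. Each observation
          // yields one successor belief in the perfect-information game.
          def step(d: Domain, b: Belief, a: Action): Map[Obs, Belief] =
            b.flatMap(s => d.trans(s, a)).groupBy(d.obs)

          // Breadth-first exploration of the beliefs reachable from the initial one:
          // these are the states of the resulting game of perfect information.
          def reachableBeliefs(d: Domain): Set[Belief] = {
            val queue = scala.collection.mutable.Queue(d.init)
            val seen  = scala.collection.mutable.Set(d.init)
            while (queue.nonEmpty) {
              val b = queue.dequeue()
              for (a <- d.actions; b2 <- step(d, b, a).values if !seen(b2)) {
                seen += b2
                queue.enqueue(b2)
              }
            }
            seen.toSet
          }
        }

    A plan for the original partially observable domain then corresponds to a strategy over these beliefs, which is the tight correspondence between plans and strategies mentioned above.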

    Solving Parity Games in Scala

    Parity games are two-player games, played on directed graphs, whose nodes are labeled with priorities. Along a play, the maximal priority occurring infinitely often determines the winner. In the last two decades, a variety of algorithms and successive optimizations have been proposed. The majority of them have been implemented in PGSolver, written in OCaml, which the community has adopted as the de facto platform for solving parity games efficiently, as well as for evaluating the performance of the algorithms in several specific cases. PGSolver includes the Zielonka recursive algorithm, which has been shown to perform better than the others on randomly generated games. However, even for arenas with a few thousand nodes (especially over dense graphs), it requires minutes to solve the corresponding game. In this paper, we thoroughly revisit the implementation of the recursive algorithm, introducing several improvements and making use of the Scala programming language. These choices have proved very successful, gaining up to two orders of magnitude in running time.
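
    For reference, below is a compact Scala sketch of the recursive algorithm under the max-priority convention stated above. The Game encoding and the set-based attractor are illustrative and deliberately unoptimized; they are not the tuned implementation the paper describes, but they show the structure that such improvements target. Every node is assumed to have at least one successor.

        object Zielonka {
          // owner(v): 0 or 1; priority(v): non-negative; succ(v): outgoing edges.
          final case class Game(nodes: Set[Int], owner: Int => Int,
                                priority: Int => Int, succ: Int => Set[Int])

          // Nodes from which `player` can force the play into `target`.
          def attractor(g: Game, player: Int, target: Set[Int]): Set[Int] = {
            var attr = target
            var changed = true
            while (changed) {
              changed = false
              for (v <- g.nodes if !attr(v)) {
                val succIn = g.succ(v).filter(g.nodes)   // stay inside the subgame
                val pull =
                  if (g.owner(v) == player) succIn.exists(attr)
                  else succIn.nonEmpty && succIn.forall(attr)
                if (pull) { attr += v; changed = true }
              }
            }
            attr
          }

          // Returns (winning region of player 0, winning region of player 1).
          def solve(g: Game): (Set[Int], Set[Int]) =
            if (g.nodes.isEmpty) (Set.empty, Set.empty)
            else {
              val d      = g.nodes.map(g.priority).max
              val player = d % 2                        // the player favored by d
              val top    = g.nodes.filter(v => g.priority(v) == d)
              val a      = attractor(g, player, top)
              val (w0, w1) = solve(restrict(g, g.nodes -- a))
              val wo = if (player == 0) w1 else w0      // opponent's sub-region
              if (wo.isEmpty) {
                if (player == 0) (g.nodes, Set.empty) else (Set.empty, g.nodes)
              } else {
                val b = attractor(g, 1 - player, wo)
                val (u0, u1) = solve(restrict(g, g.nodes -- b))
                if (player == 0) (u0, u1 ++ b) else (u0 ++ b, u1)
              }
            }

          private def restrict(g: Game, s: Set[Int]): Game =
            g.copy(nodes = s, succ = v => g.succ(v).intersect(s))
        }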

    Improving parity games in practice

    Parity games are infinite-round two-player games played on directed graphs whose nodes are labeled with priorities. The winner of a play is determined by the smallest priority (even or odd) that is encountered infinitely often along the play. In the last two decades, several algorithms for solving parity games have been proposed and implemented in PGSolver, a platform written in OCaml. PGSolver includes Zielonka's recursive algorithm (RE, for short), which is known to be the best-performing one on random games. Notably, several attempts have been made to improve the performance of RE in PGSolver, but with only small advances in practice. In this work, we thoroughly revisit the implementation of RE, focusing on the use of specific data structures and of programming languages such as Scala, Java, C++, and Go. Our empirical evaluation shows that these choices are successful, gaining up to three orders of magnitude in running time over the classic version of the algorithm implemented in PGSolver.
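
    As an example of the kind of data-structure choice at stake, the following Scala sketch computes an attractor in time linear in the number of edges, using predecessor lists and counters of not-yet-attracted successors instead of repeated scans of the arena. The array-based layout is illustrative and not necessarily the one adopted in the paper.

        import scala.collection.mutable

        // player: 0 or 1; owner(v): who moves at v; pred(v): predecessors of v;
        // outDeg(v): number of successors of v. Returns membership in the attractor.
        def attractor(n: Int, owner: Array[Int], pred: Array[List[Int]],
                      outDeg: Array[Int], player: Int, target: Set[Int]): Array[Boolean] = {
          val inAttr = Array.fill(n)(false)
          val count  = outDeg.clone()          // successors not yet attracted
          val queue  = mutable.Queue.empty[Int]
          for (v <- target) { inAttr(v) = true; queue.enqueue(v) }
          while (queue.nonEmpty) {
            val v = queue.dequeue()
            for (u <- pred(v) if !inAttr(u)) {
              count(u) -= 1                    // one more successor of u is attracted
              // owner's nodes need one attracted successor; opponent's need all of them
              if (owner(u) == player || count(u) == 0) {
                inAttr(u) = true
                queue.enqueue(u)
              }
            }
          }
          inAttr
        }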

    Pure-Past Linear Temporal and Dynamic Logic on Finite Traces

    LTLf and LDLf are well-known logics on finite traces. We review PLTLf and PLDLf, their pure-past versions. These are interpreted backward, from the end of the trace towards the beginning. Because of this, we can exploit a foundational result on reverse languages to get an exponential improvement, with respect to LTLf/LDLf, in computing the corresponding DFA. This exponential improvement is reflected in several forms of sequential decision making involving temporal specifications, such as planning and decision problems in non-deterministic and non-Markovian domains. Interestingly, PLTLf (resp. PLDLf) has the same expressive power as LTLf (resp. LDLf), but transforming a PLTLf (resp. PLDLf) formula into its equivalent in LTLf (resp. LDLf) is quite expensive. Hence, to take advantage of the exponential improvement, properties of interest must be directly expressed in PLTLf/PLDLf.
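
    The source of the computational advantage can be seen in how pure-past formulas are evaluated: a single pass over the trace suffices, and the truth values of all subformulas at the previous instant are the only state needed, which is what the states of the singly exponential DFA track. Below is a minimal Scala sketch of this one-pass evaluation, with an illustrative AST and trace encoding.

        sealed trait PLTLf
        final case class Atom(p: String)           extends PLTLf
        final case class Not(f: PLTLf)             extends PLTLf
        final case class And(l: PLTLf, r: PLTLf)   extends PLTLf
        final case class Yesterday(f: PLTLf)       extends PLTLf
        final case class Since(l: PLTLf, r: PLTLf) extends PLTLf

        object Eval {
          // Truth of `f` and all its subformulas at the current instant, given their
          // values at the previous instant (the empty map at the first instant).
          def step(f: PLTLf, now: Set[String],
                   prev: Map[PLTLf, Boolean]): Map[PLTLf, Boolean] = f match {
            case Atom(p)      => Map(f -> now(p))
            case Not(g)       => val m = step(g, now, prev); m + (f -> !m(g))
            case And(l, r)    => val m = step(l, now, prev) ++ step(r, now, prev)
                                 m + (f -> (m(l) && m(r)))
            case Yesterday(g) => val m = step(g, now, prev)
                                 m + (f -> prev.getOrElse(g, false))  // false at instant 0
            case Since(l, r)  => val m = step(l, now, prev) ++ step(r, now, prev)
                                 // l S r: r holds now, or l holds now and l S r held before
                                 m + (f -> (m(r) || (m(l) && prev.getOrElse(f, false))))
          }

          // A trace is a finite sequence of propositional interpretations;
          // the formula holds iff it is true at the last instant.
          def eval(f: PLTLf, trace: Seq[Set[String]]): Boolean =
            trace.foldLeft(Map.empty[PLTLf, Boolean])((prev, now) => step(f, now, prev))
                 .getOrElse(f, false)
        }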

    Two-Stage Technique for LTLf Synthesis Under LTL Assumptions

    In synthesis, assumptions are constraints on the environment that rule out certain environment behaviors. A key observation is that even if we consider systems with LTLf goals on finite traces, assumptions need to be expressed over infinite traces, using LTL, since the decision to stop the trace is controlled by the agent. To solve synthesis of LTLf goals under LTL assumptions, we could reduce the problem to LTL synthesis. Unfortunately, while synthesis in LTLf and in LTL have the same worst-case complexity (both are 2EXPTIME-complete), the algorithms available for LTL synthesis perform much worse in practice than those for LTLf synthesis. Recently, it has been shown that for basic forms of fairness and stability assumptions we can avoid such a detour to LTL and keep the simplicity of LTLf synthesis. In this paper, we generalize these results and show how to effectively handle any kind of LTL assumption. Specifically, we devise a two-stage technique for solving LTLf synthesis under general LTL assumptions and show empirically that this technique performs much better than standard LTL synthesis.
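
    The "simplicity of LTLf synthesis" referred to above stems from the fact that an LTLf goal can be compiled into a DFA, after which synthesis amounts to solving a reachability game on that DFA. The following Scala sketch shows only that underlying fixpoint, not the paper's two-stage technique; the DfaGame encoding, the convention that the environment moves first at each step, and all names are illustrative.

        // One game state per DFA state; each round the environment picks a move,
        // then the agent responds, and the DFA advances on the combined letter.
        final case class DfaGame[Q](states: Set[Q], init: Q, accepting: Set[Q],
                                    envMoves: Set[String], agentMoves: Set[String],
                                    delta: (Q, String, String) => Q)

        // Least fixpoint of "the agent can force the DFA into an accepting state".
        def agentWins[Q](g: DfaGame[Q]): Boolean = {
          var win = g.accepting
          var changed = true
          while (changed) {
            changed = false
            for (q <- g.states if !win(q)) {
              // q is winning if, whatever the environment does, some agent
              // response leads to a state already known to be winning.
              val controllable =
                g.envMoves.forall(e => g.agentMoves.exists(a => win(g.delta(q, e, a))))
              if (controllable) { win += q; changed = true }
            }
          }
          win(g.init)
        }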

    Application of openBIM for the Management of Existing Railway Infrastructure: Case Study of the Cancello - Benevento Railway Line

    In the field of infrastructure, the development and application of the openBIM (open Building Information Modeling) approach and related standards (principally Industry Foundation Classes) remain limited with regard to processes in O&M (Operation and Maintenance) phases, as well as the broader context of AM (Asset Management). This article describes the activities carried out as part of a pilot project addressing the need to manage the operation, and to assess the condition and value, of the existing infrastructure along the Cancello–Benevento railway line. The principal goal was to systematize information by digitalizing the infrastructure, in order to enable the assessment of possible performance gaps (compared to national railway standards) in the event of integration into the national infrastructure. In compliance with the project requirements, a digitalization strategy was designed covering the definition of surveying activities and the implementation of openBIM systems for the development of an object library and a federated digital model, structured within the collaborative platform adopted for the project and allowing management, maintenance, and subsequent financial evaluation in the broader context of asset management. The project involved the collaboration of railway operators, a university, and a software company that implemented innovative concepts concerning IFC (specifically, IFC4x2 was used) through the development of dedicated software solutions. The proposed digital solution enabled the use of digital models as access keys to survey and maintenance information, available in real time from the ERP platforms used by the railway operators. This project was nominated for the buildingSMART Awards 2021 and was one of three finalists in the "Asset Management Using openBIM" category.

    Pain-motor integration in the primary motor cortex in Parkinson's disease

    In Parkinson's disease (PD), the influence of chronic pain on motor features has never been investigated. We have recently designed a technique that combines nociceptive system activation by laser stimuli and primary motor cortex (M1) activation through transcranial magnetic stimulation (TMS), in a laser-paired associative stimulation design (Laser-PAS). In controls, Laser-PAS induces long-term changes in motor evoked potentials, reflecting M1 long-term potentiation-like plasticity arising from pain-motor integration.

    Measurement of Oral Epithelial Thickness by Optical Coherence Tomography

    Optical coherence tomography (OCT) is a real-time, in-situ, non-invasive imaging device that is able to perform a cross-sectional evaluation of tissue microstructure based on the specific intensity of back-scattered and reflected light. The aim of the present study was to define normal values of epithelial thickness within the oral cavity. OCT measurements of epithelial thickness were performed in 28 healthy patients at six different locations within the oral cavity. Image analysis was performed using ImageJ 1.52 software. The healthy epithelium had a mean thickness of 335.59 ± 150.73 µm. Depending on its location within the oral cavity, the epithelium was thickest in the region of the buccal mucosa (659.79 µm) and thinnest in the floor of the mouth (100.07 µm). OCT has been shown to be useful for the evaluation of oral mucosa in vivo and in real time. Our study provides reference values for the epithelial thickness of multiple sites within the oral cavity. Knowledge of the thickness values of healthy mucosa is, therefore, of fundamental importance.

    Corticobasal syndrome: neuroimaging and neurophysiological advances

    Corticobasal degeneration (CBD) is a neurodegenerative condition characterized by 4R-tau protein deposition in several brain regions, which clinically manifests as a heterogeneous atypical parkinsonism typically presenting in adulthood. The prototypical clinical phenotype of CBD is corticobasal syndrome (CBS). Important insights into the pathophysiological mechanisms underlying motor and higher cortical symptoms in CBS have been gained by using advanced neuroimaging and neurophysiological techniques. Structural and functional neuroimaging studies have often shown asymmetric cortical and subcortical abnormalities, mainly involving perirolandic and parietal regions and basal ganglia structures. Neurophysiological investigations, including electroencephalography and somatosensory evoked potentials, have provided useful information on the origin of myoclonus and on cortical sensory loss. Transcranial magnetic stimulation has demonstrated heterogeneous and asymmetric changes in the excitability and plasticity of the primary motor cortex, as well as abnormal hemispheric connectivity. Neuroimaging and neurophysiological abnormalities in multiple brain areas reflect the asymmetric neurodegeneration, leading to the asymmetric motor and higher cortical symptoms in CBS.